
    3D Vehicle Extraction and Tracking from Multiple Viewpoints for Traffic Monitoring by using Probability Fusion Map

    This paper presents a novel solution to vehicle occlusion and 3D measurement for traffic monitoring based on data fusion from multiple stationary cameras. Compared with conventional single-camera methods for traffic monitoring, our approach fuses video data from different viewpoints into a common probability fusion map (PFM) and extracts targets from it. The proposed PFM efficiently handles and fuses data to estimate the probability of vehicle appearance, which real outdoor experiments verify to be more reliable than a single-camera solution. An AMF-based shadow modeling algorithm is also proposed in this paper to remove shadows on the road area and extract proper vehicle regions.
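    The fusion idea can be sketched as a weighted combination of per-camera probability maps over a common ground grid. This is a minimal illustration assuming the maps are already projected onto a shared road-plane grid; the function names, the weighted-average fusion rule, and the threshold are assumptions, not the paper's exact PFM formulation:

    ```python
    import numpy as np

    def probability_fusion_map(per_camera_probs, weights=None):
        """Fuse per-camera vehicle-appearance probability maps (H x W,
        values in [0, 1]) into one map. Assumes all maps are already
        registered to a common road-plane grid; the projection step
        from the paper is not reproduced here."""
        probs = np.stack(per_camera_probs)            # (C, H, W)
        if weights is None:
            weights = np.ones(len(per_camera_probs))
        weights = np.asarray(weights, dtype=float)
        weights /= weights.sum()
        # Weighted average as a simple fusion rule; the original PFM
        # may combine evidence differently.
        return np.tensordot(weights, probs, axes=1)

    def extract_vehicles(fused, threshold=0.5):
        """Binary vehicle mask from the fused probability map."""
        return fused > threshold
    ```

    With two cameras reporting 0.8 and 0.4 everywhere, the fused map is 0.6 and exceeds a 0.5 threshold, whereas either camera alone would give a less stable decision.
    
    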

    Towards Unifying Diffusion Models for Probabilistic Spatio-Temporal Graph Learning

    Spatio-temporal graph learning is a fundamental problem in the Web of Things era, enabling a plethora of Web applications such as smart cities, human mobility, and climate analysis. Existing approaches tackle different learning tasks independently, tailoring their models to unique task characteristics. These methods, however, fall short of modeling intrinsic uncertainties in the spatio-temporal data, and their specialized designs limit their universality as general spatio-temporal learning solutions. In this paper, we propose to model the learning tasks from a unified perspective, viewing them as predictions based on conditional information with shared spatio-temporal patterns. Based on this proposal, we introduce Unified Spatio-Temporal Diffusion Models (USTD) to address the tasks uniformly within the uncertainty-aware diffusion framework. USTD is holistically designed, comprising a shared spatio-temporal encoder and task-specific attention-based denoising networks. The shared encoder, optimized by a pre-training strategy, effectively captures conditional spatio-temporal patterns. The denoising networks, utilizing both cross- and self-attention, integrate conditional dependencies and generate predictions. Opting for forecasting and kriging as downstream tasks, we design Spatial Gated Attention (SGA) and Temporal Gated Attention (TGA) for each task, with different emphases on the spatial and temporal dimensions, respectively. By combining the advantages of deterministic encoders and probabilistic diffusion models, USTD achieves state-of-the-art performance against deterministic and probabilistic baselines in both tasks, while also providing valuable uncertainty estimates.
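    The uncertainty-aware diffusion framework the abstract refers to rests on a standard forward noising process. Below is a generic DDPM-style sketch of that step only; the function name is hypothetical, and USTD's encoder and attention-based denoisers are not reproduced:

    ```python
    import numpy as np

    def forward_diffuse(x0, t, alphas_cumprod, noise):
        """q(x_t | x_0): corrupt clean data x0 to noise level t.

        alphas_cumprod[t] is the cumulative product of (1 - beta_s)
        for s <= t from the noise schedule. The denoising network is
        then trained to predict `noise` from x_t and the conditional
        spatio-temporal encoding (omitted in this sketch).
        """
        a = alphas_cumprod[t]
        return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * noise
    ```

    At t where the cumulative alpha is 1 the data is untouched; as it decays toward 0, the sample approaches pure noise, which is what lets the reverse (denoising) process express predictive uncertainty.
    
    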

    Graph Neural Processes for Spatio-Temporal Extrapolation

    We study the task of spatio-temporal extrapolation, which generates data at target locations from surrounding contexts in a graph. This task is crucial because sensors that collect data are sparsely deployed, resulting in a lack of fine-grained information due to high deployment and maintenance costs. Existing methods either use learning-based models like neural networks or statistical approaches like Gaussian processes for this task. However, the former lack uncertainty estimates and the latter fail to capture complex spatial and temporal correlations effectively. To address these issues, we propose Spatio-Temporal Graph Neural Processes (STGNP), a neural latent variable model which provides both capabilities simultaneously. Specifically, we first learn deterministic spatio-temporal representations by stacking layers of causal convolutions and cross-set graph neural networks. Then, we learn latent variables for target locations through vertical latent state transitions along the layers and obtain extrapolations. Importantly, during the transitions we propose Graph Bayesian Aggregation (GBA), a Bayesian graph aggregator that aggregates contexts while accounting for uncertainties in the context data and graph structure. Extensive experiments show that STGNP has desirable properties such as uncertainty estimates and strong learning capabilities, and achieves state-of-the-art results by a clear margin.
    Comment: SIGKDD 202
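    The core of a Bayesian aggregator like GBA can be illustrated with a Gaussian-conjugate update: each context node contributes an observation whose influence is scaled by its own uncertainty and by the graph edge weight. This is a simplified one-target sketch with assumed names and a scalar state; the paper's exact update operates on learned latent variables:

    ```python
    import numpy as np

    def graph_bayesian_aggregation(means, variances, edge_weights,
                                   prior_mean=0.0, prior_var=1.0):
        """Precision-weighted aggregation of context nodes for one
        target node. Less certain contexts (large variance) and weak
        edges (small weight) contribute less to the posterior."""
        means = np.asarray(means, dtype=float)
        variances = np.asarray(variances, dtype=float)
        w = np.asarray(edge_weights, dtype=float)

        prior_prec = 1.0 / prior_var
        obs_prec = w / variances                     # edge-weighted precisions
        post_prec = prior_prec + obs_prec.sum()
        post_mean = (prior_prec * prior_mean
                     + (obs_prec * means).sum()) / post_prec
        return post_mean, 1.0 / post_prec
    ```

    The posterior variance shrinks as more (or more reliable) contexts are aggregated, which is the mechanism that yields calibrated uncertainty at sparsely observed locations.
    
    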

    Optimal real-time power dispatch of power grid with wind energy forecasting under extreme weather

    With breakthroughs in the power electronics industry, the stability and rapid power regulation of wind power generation have improved, and its generation technology is becoming increasingly mature. However, there are still weaknesses in the operation and control of power systems under the influence of extreme weather events, especially in real-time power dispatch. To distribute the power of the regulation resources optimally and more stably, a wind energy forecasting-based power dispatch model with time-control interval optimization is proposed. In this model, the outage of wind energy under extreme weather is analyzed by an autoregressive integrated moving average (ARIMA) model, and the other regulation resources are used to balance the corresponding wind power drop and power mismatch. Meanwhile, an algorithm named weighted mean of vectors (INFO) is employed to solve the real-time power dispatch and minimize the deviation between the power command and the real output. Lastly, the proposed optimal real-time power dispatch is evaluated in a simulation model with ten regulation resources. The simulation tests show that the combination of ARIMA and INFO can effectively improve the power control performance of the PD-WEF system.
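    The balancing step described above, covering a forecast wind power drop with the remaining regulation resources, can be sketched as a capacity-limited proportional allocation. This is an illustrative stand-in, not the paper's method: the INFO metaheuristic that actually solves the dispatch is replaced here by a simple proportional rule, and all names are assumptions:

    ```python
    def dispatch_mismatch(mismatch_mw, headroom_mw):
        """Allocate a power shortfall (MW) across regulation resources
        in proportion to each unit's remaining headroom, capping each
        unit at its limit. Returns (allocations, unserved remainder)."""
        alloc = [0.0] * len(headroom_mw)
        remaining = mismatch_mw
        # Iterate: when a unit hits its cap, its share is re-spread
        # over the units that still have headroom.
        for _ in range(len(headroom_mw)):
            free = [i for i, h in enumerate(headroom_mw) if alloc[i] < h]
            total_free = sum(headroom_mw[i] - alloc[i] for i in free)
            if remaining <= 1e-9 or total_free <= 0:
                break
            for i in free:
                share = remaining * (headroom_mw[i] - alloc[i]) / total_free
                alloc[i] += min(share, headroom_mw[i] - alloc[i])
            remaining = mismatch_mw - sum(alloc)
        return alloc, remaining
    ```

    A nonzero remainder signals that the available regulation resources cannot fully cover the forecast wind power drop, which is exactly the condition the dispatch model must anticipate under extreme weather.
    
    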

    Motion detection from a moving observer using pure feature matching

    Motion detection from a moving observer is an important technique for 3D dynamic image analysis, especially in research on obstacle detection and tracking for autonomous driving and driver support systems. Because the background changes continuously, detecting truly moving objects is difficult, and most approaches rely on optical flow to measure the difference between the flow vectors of the background and of the moving objects. Unfortunately, computing accurate optical flow vectors carries a heavy computational cost. A new motion detection method based on 3D camera-motion analysis is proposed in this paper. Using the particular motion characteristics of the on-board camera and its Focus of Expansion (FOE), our method detects truly moving objects against the continuously changing background by using only pure feature matching between two adjacent frames. The camera's 3D motion can theoretically be determined in our method from only three pairs of matching points, which makes it faster and more efficient for real-time applications. Experiments on real outdoor road scenes show the accuracy and efficiency of our method.
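    The FOE mentioned above can be illustrated with a least-squares construction: under pure camera translation, the line through each matched feature pair (previous position to current position) passes through the FOE, so the FOE is the point closest to all those lines. This sketch covers only that geometric step with assumed names; it is not the paper's full three-point camera-motion solution:

    ```python
    import numpy as np

    def estimate_foe(pts_prev, pts_curr):
        """Least-squares intersection of the displacement lines of
        matched features; for a purely translating camera this point
        is the Focus of Expansion. Points are (x, y) image coords."""
        A = np.zeros((2, 2))
        b = np.zeros(2)
        for p, q in zip(np.asarray(pts_prev, float),
                        np.asarray(pts_curr, float)):
            d = q - p
            d /= np.linalg.norm(d)
            P = np.eye(2) - np.outer(d, d)   # projects off the line direction
            A += P                            # accumulate normal equations
            b += P @ p
        return np.linalg.solve(A, b)
    ```

    Features whose displacement lines miss the estimated FOE by a large margin are candidates for independently moving objects, which is the intuition behind separating real motion from the expanding background flow.
    
    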